Please consider donating to the Computer Science Club to help offset the costs of bringing you our talks.
Abstract
Deep networks can be learned efficiently from unlabeled data. The layers of representation are learned one at a time using a simple learning module, called a "Restricted Boltzmann Machine," that has only one layer of latent variables. The values of the latent variables of one module form the data for training the next module. Although deep networks have been quite successful for tasks such as object recognition, information retrieval, and modeling motion capture data, the simple learning modules lack multiplicative interactions, which are very useful for some types of data.
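As a concrete illustration of the stacking procedure described above, the following is a minimal NumPy sketch of training a stack of binary RBMs with one step of contrastive divergence (CD-1). All names here (cd1_step, the layer sizes, the toy data) are illustrative and not taken from the talk.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, b, c, lr=0.1):
    """One CD-1 update for a binary RBM.
    v0: batch of visible vectors, shape (batch, n_visible).
    W:  weights (n_visible, n_hidden); b, c: visible/hidden biases."""
    # Positive phase: hidden probabilities given the data.
    ph0 = sigmoid(v0 @ W + c)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    # Negative phase: one step of Gibbs sampling back through the model.
    pv1 = sigmoid(h0 @ W.T + b)
    ph1 = sigmoid(pv1 @ W + c)
    # The learning rule needs only pair-wise statistics <v h>.
    W += lr * (v0.T @ ph0 - pv1.T @ ph1) / v0.shape[0]
    b += lr * (v0 - pv1).mean(axis=0)
    c += lr * (ph0 - ph1).mean(axis=0)
    return ph0  # hidden activations become the data for the next layer

# Greedy layer-wise stacking: train one RBM, then feed its hidden
# activations to the next module, exactly as the abstract describes.
data = (rng.random((100, 20)) < 0.5).astype(float)  # toy binary data
sizes = [20, 15, 10]
layer_input = data
for n_v, n_h in zip(sizes[:-1], sizes[1:]):
    W = 0.01 * rng.standard_normal((n_v, n_h))
    b, c = np.zeros(n_v), np.zeros(n_h)
    for epoch in range(5):
        hidden = cd1_step(layer_input, W, b, c)
    layer_input = hidden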
The talk will show how a third-order energy function can be factorized to yield a simple learning module that retains the advantageous properties of a Restricted Boltzmann Machine, such as very simple exact inference and a very simple learning rule based on pair-wise statistics. The new module contains multiplicative interactions that are useful for a variety of unsupervised learning tasks. Researchers at the University of Toronto have been using this type of module to extract oriented energy from image patches and dense flow fields from image sequences. The new module can also be used to synthesize motions of a particular style by blending autoregressive models of motion capture data.
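For readers unfamiliar with the factorization, here is a sketch of the idea; the notation is illustrative rather than the speaker's. A full third-order energy over two visible vectors x, y and a hidden vector h needs one parameter per triple, w_{ijk}. Factoring that tensor as a sum over factors f replaces it with three ordinary weight matrices:

E(x, y, h) = -\sum_{i,j,k} w_{ijk}\, x_i y_j h_k,
\qquad
w_{ijk} \approx \sum_f w^x_{if}\, w^y_{jf}\, w^h_{kf},

so that

E(x, y, h) \approx -\sum_f \Big(\sum_i x_i w^x_{if}\Big)\Big(\sum_j y_j w^y_{jf}\Big)\Big(\sum_k h_k w^h_{kf}\Big).

Given x and y, the hidden units remain conditionally independent, so exact inference is still a single sigmoid per hidden unit, p(h_k = 1 \mid x, y) = \sigma\big(c_k + \sum_f w^h_{kf} \big(\sum_i x_i w^x_{if}\big)\big(\sum_j y_j w^y_{jf}\big)\big), which is why the factored module is as easy to use as an ordinary RBM.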
Download
BitTorrent: Talk (XviD) | Talk (Ogg/Theora) | Talk (MP4) | Talk (MPG)
HTTP (web browser): Talk (XviD) | Talk (Ogg/Theora) | Talk (MP4) | Talk (MPG)